6 research outputs found

    Artificial Intelligence and Interstitial Lung Disease: Diagnosis and Prognosis.

    Interstitial lung disease (ILD) is now diagnosed by an ILD board consisting of radiologists, pulmonologists, and pathologists. They discuss the combination of computed tomography (CT) images, pulmonary function tests, demographic information, and histology, and then agree on one of the approximately 200 ILD diagnoses. Recent approaches employ computer-aided diagnostic tools to improve disease detection, monitoring, and accurate prognostication. Methods based on artificial intelligence (AI) may be used in computational medicine, especially in image-based specialties such as radiology. This review summarises and highlights the strengths and weaknesses of the latest and most significant published methods that could lead to a holistic system for ILD diagnosis. We explore current AI methods and the data used to predict the prognosis and progression of ILDs. It is therefore essential to highlight the data that hold the most information related to risk factors for progression, e.g., CT scans and pulmonary function tests. This review aims to identify potential gaps, highlight areas that require further research, and identify the methods that could be combined to yield more promising results in future studies.

    An Empirical Analysis for Zero-Shot Multi-Label Classification on COVID-19 CT Scans and Uncurated Reports

    The pandemic resulted in vast repositories of unstructured data, including radiology reports, due to increased medical examinations. Previous research on automated diagnosis of COVID-19 primarily focuses on X-ray images, despite their lower precision compared to computed tomography (CT) scans. In this work, we leverage unstructured data from a hospital and harness the fine-grained details offered by CT scans to perform zero-shot multi-label classification based on contrastive visual language learning. In collaboration with human experts, we investigate the effectiveness of multiple zero-shot models that aid radiologists in detecting pulmonary embolisms and identifying intricate lung details like ground glass opacities and consolidations. Our empirical analysis provides an overview of the possible solutions to target such fine-grained tasks, so far overlooked in the medical multimodal pretraining literature. Our investigation promises future advancements in the medical image analysis community by addressing some challenges associated with unstructured data and fine-grained multi-label classification. Comment: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops 202
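The zero-shot multi-label setup the abstract describes can be sketched generically: with a contrastive vision-language encoder (CLIP-style), each label gets a text-prompt embedding, and every label is scored independently against the image embedding, so several findings can be predicted for one scan. The embeddings, labels, and threshold below are illustrative stand-ins, not the paper's model:

```python
import numpy as np

def zero_shot_multilabel(image_emb, label_prompt_embs, threshold=0.0):
    """Score each label independently by cosine similarity between the image
    embedding and a text embedding of that label's prompt; labels whose
    similarity exceeds the threshold are predicted present (multi-label)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = label_prompt_embs / np.linalg.norm(label_prompt_embs, axis=1, keepdims=True)
    sims = txt @ img            # one cosine score per label, in [-1, 1]
    return sims, sims > threshold

# Toy embeddings standing in for a contrastive vision-language encoder.
rng = np.random.default_rng(0)
labels = ["ground glass opacity", "consolidation", "pulmonary embolism"]
image_emb = rng.normal(size=64)
prompts = rng.normal(size=(len(labels), 64))
scores, preds = zero_shot_multilabel(image_emb, prompts)
```

Because each label is thresholded on its own score rather than competing in a softmax, any subset of the findings can be flagged simultaneously.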

    Self-Attention and Ingredient-Attention Based Model for Recipe Retrieval from Image Queries

    Direct computer vision-based nutrient content estimation is a demanding task, due to deformation and occlusion of ingredients, as well as high intra-class and low inter-class variability between meal classes. In order to tackle these issues, we propose a system for recipe retrieval from images. The recipe information can subsequently be used to estimate the nutrient content of the meal. In this study, we utilize the multi-modal Recipe1M dataset, which contains over 1 million recipes accompanied by over 13 million images. The proposed model can operate as a first step in an automatic pipeline for the estimation of nutrient content by providing hints related to ingredients and instructions. Through self-attention, our model can directly process raw recipe text, making the upstream instruction sentence embedding process redundant and thus reducing training time, while providing desirable retrieval results. Furthermore, we propose the use of an ingredient attention mechanism, in order to gain insight into which instructions, parts of instructions, or single instruction words are important for processing a single ingredient within a certain recipe. Attention-based recipe text encoding contributes to solving the issue of high intra-class/low inter-class variability by focusing on preparation steps specific to the meal. The experimental results demonstrate the potential of such a system for recipe retrieval from images. A comparison with two baseline methods is also presented.
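The ingredient-attention idea, where an ingredient embedding queries the instruction token embeddings to reveal which words matter for that ingredient, amounts to scaled dot-product attention with a single query. A minimal sketch, with toy embeddings in place of the paper's learned encoders:

```python
import numpy as np

def ingredient_attention(ingredient_q, instruction_tokens):
    """Scaled dot-product attention: one ingredient embedding queries the
    instruction token embeddings; the softmax weights indicate which
    instruction words are relevant to that ingredient."""
    d = ingredient_q.shape[-1]
    scores = instruction_tokens @ ingredient_q / np.sqrt(d)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights = weights / weights.sum()
    context = weights @ instruction_tokens    # ingredient-specific summary
    return weights, context

# Toy setup: 5 instruction tokens in an 8-dimensional embedding space.
rng = np.random.default_rng(1)
weights, context = ingredient_attention(rng.normal(size=8),
                                        rng.normal(size=(5, 8)))
```

Inspecting `weights` directly is what gives the interpretability the abstract mentions: the largest entries point to the instruction words most tied to the ingredient.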

    Gradual Fine-Tuning for Accurate Blood Glucose Level Prediction

    Background and Aims: For individuals with Type 1 diabetes (T1D), it is of eminent importance to avoid hypo- and hyperglycemic events. The availability of long glucose time series, along with powerful AI methods, has allowed the development of glucose prediction algorithms. Nonetheless, open issues remain, such as prediction time delays, the amount of history needed, and how heterogeneous and sparse diabetes information affects performance. Method: In this study, we utilized data from 100 individuals with T1D provided by the Juvenile Diabetes Research Foundation. The dataset provides pump settings, sensor outputs (e.g., insulin rates, continuous glucose monitoring (CGM)), and contextual information such as age and years with diabetes. To mitigate the adverse impact of large inter-patient variability, we propose a training scheme based on gradual fine-tuning. Initially, the novel AI model is trained on all data and subsequently fine-tuned over groups with shared characteristics, down to the individual patient level. The individuals with T1D are assigned to groups based on similarity measures defined using glucose variability indices. For each individual, an ensemble of five dedicated sequence-to-sequence LSTM networks is used. The ensemble uses CGM data, bolus doses, and meal intake as input and outputs blood glucose predictions 30 min ahead in time. Results: As shown in Table 1, the root-mean-square error (RMSE), mean absolute error (MAE), and time lag were used as performance measures for the various training schemes.
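The gradual fine-tuning scheme described above, population-level training, then fine-tuning on the patient's glucose-variability group, then on the individual, can be sketched as a generic three-stage loop. The `train_fn` and toy "model" below are placeholders for the paper's LSTM ensemble, used only to show the order of training stages:

```python
from copy import deepcopy

def gradual_fine_tune(model, train_fn, population_data, group_data, patient_data):
    """Three-stage training: population level, then the patient's
    similarity group, then the single patient. Each stage branches off a
    copy so population and group weights can be reused for other patients."""
    train_fn(model, population_data)        # stage 1: all individuals
    group_model = deepcopy(model)
    train_fn(group_model, group_data)       # stage 2: similar patients only
    patient_model = deepcopy(group_model)
    train_fn(patient_model, patient_data)   # stage 3: individual patient
    return patient_model

# Toy "model": a list recording which data it was trained on, in order.
model = []
def train_fn(m, data):
    m.append(data)

final = gradual_fine_tune(model, train_fn, "population", "groupA", "patient7")
```

Branching via `deepcopy` at each stage means the population model and each group model are trained once and shared, while every patient still ends up with dedicated weights.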

    A comprehensive review of imaging findings in COVID-19 - status in early 2021.

    Medical imaging methods are assuming a greater role in the workup of patients with COVID-19, mainly in relation to the primary manifestation of pulmonary disease and the tissue distribution of the angiotensin-converting-enzyme 2 (ACE 2) receptor. However, the field is so new that no consensus view has emerged guiding clinical decisions to employ imaging procedures such as radiography, computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging, or in what measure the risk of exposure of staff to possible infection could be justified by the knowledge gained. The insensitivity of current RT-PCR methods for positive diagnosis is part of the rationale for resorting to imaging procedures. While CT is more sensitive than genetic testing in hospitalized patients, positive findings of ground glass opacities depend on the disease stage. There is sparse reporting on PET/CT with [18F]-FDG in COVID-19, but available results are congruent with the earlier literature on viral pneumonias. There is a high incidence of cerebral findings in COVID-19, and likewise evidence of gastrointestinal involvement. Artificial intelligence, notably machine learning, is emerging as an effective method for diagnostic image analysis, with performance in the discriminative diagnosis of COVID-19 pneumonia comparable to that of human practitioners.

    A Deep-Learning Diagnostic Support System for the Detection of COVID-19 Using Chest Radiographs: A Multireader Validation Study.

    The aim of this study was to compare a diagnosis support system to detect COVID-19 pneumonia on chest radiographs (CXRs) against radiologists of various levels of expertise in chest imaging. MATERIALS AND METHODS Five publicly available databases comprising normal CXRs, confirmed COVID-19 pneumonia cases, and other pneumonias were used. After the harmonization of the data, the training set included 7966 normal cases, 5451 with other pneumonia, and 258 CXRs with COVID-19 pneumonia, whereas in the testing data set, each category was represented by 100 cases. Eleven blinded radiologists with various levels of expertise independently read the testing data set. The data were analyzed separately with the newly proposed artificial intelligence-based system and by consultant radiologists and residents, with respect to positive predictive value (PPV), sensitivity, and F-score (harmonic mean of PPV and sensitivity). The χ² test was used to compare the sensitivity, specificity, accuracy, PPV, and F-scores of the readers and the system. RESULTS The proposed system achieved higher overall diagnostic accuracy (94.3%) than the radiologists (61.4% ± 5.3%). The radiologists reached average sensitivities for normal CXR, other type of pneumonia, and COVID-19 pneumonia of 85.0% ± 12.8%, 60.1% ± 12.2%, and 53.2% ± 11.2%, respectively, which were significantly lower than the results achieved by the algorithm (98.0%, 88.0%, and 97.0%; P < 0.00032). The mean PPVs for all 11 radiologists for the 3 categories were 82.4%, 59.0%, and 59.0% for the healthy, other pneumonia, and COVID-19 pneumonia cases, respectively, resulting in an F-score of 65.5% ± 12.4%, which was significantly lower than the F-score of the algorithm (94.3% ± 2.0%, P < 0.00001). When other pneumonia and COVID-19 pneumonia cases were pooled, the proposed system reached an accuracy of 95.7% for any pathology and the radiologists, 88.8%.
The overall accuracy of consultants did not vary significantly compared with residents (65.0% ± 5.8% vs 67.4% ± 4.2%); however, consultants detected significantly more COVID-19 pneumonia cases (P = 0.008) and fewer healthy cases (P < 0.00001). CONCLUSIONS The system showed robust accuracy for COVID-19 pneumonia detection on CXR and surpassed radiologists at various training levels.
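The F-score reported in the study is defined in the abstract itself as the harmonic mean of PPV and sensitivity. A minimal sketch of that computation (the example inputs are illustrative, not values from the study):

```python
def f_score(ppv, sensitivity):
    """Harmonic mean of positive predictive value and sensitivity,
    the per-reader summary metric used in the study above."""
    if ppv + sensitivity == 0:
        return 0.0
    return 2 * ppv * sensitivity / (ppv + sensitivity)

# The harmonic mean sits below the arithmetic mean and punishes imbalance:
# a reader with PPV 0.8 but sensitivity 0.6 scores well under 0.7.
example = f_score(0.8, 0.6)
```

Because the harmonic mean is dominated by the smaller of the two inputs, a reader cannot compensate for poor sensitivity with a high PPV, which is why it is a stricter summary than plain accuracy.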